
    Semiotic Dynamics Solves the Symbol Grounding Problem

    Language requires the capacity to link symbols (words, sentences) through the intermediary of internal representations to the physical world, a process known as symbol grounding. One of the biggest debates in the cognitive sciences concerns the question of how human brains are able to do this. Do we need a material explanation or a system explanation? With his well-known Chinese Room thought experiment, which continues to generate a vast polemical literature of arguments and counter-arguments, John Searle argued that autonomously establishing internal representations of the world (called 'intentionality' in philosophical parlance) depends on special properties of human neural tissue, and that consequently an artificial system, such as an autonomous physical robot, can never achieve this. Here we study the Grounded Naming Game as a particular example of symbolic interaction and investigate a dynamical system that autonomously builds up and uses the semiotic networks necessary for performance in the game. We demonstrate in real experiments with physical robots that such a dynamical system indeed leads to a successful emergent communication system, and hence that symbol grounding and intentionality can be explained in terms of a particular kind of system dynamics. The human brain obviously has the right mechanisms to participate in this kind of dynamics, but the same dynamics can also be embodied in other types of physical systems.
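    The convergence dynamics behind naming games can be illustrated with a minimal, non-grounded simulation (a sketch of the generic naming game, not the paper's robotic setup): agents repeatedly pair up, the speaker names an object, and on success both collapse their inventories to the used word.

```python
import random

def naming_game(n_agents=20, max_games=20000, seed=0):
    """Minimal naming game for a single object: agents converge on one
    shared name through repeated pairwise interactions (illustrative sketch)."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]   # each agent's candidate names
    for game in range(max_games):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:                  # invent a name if none is known
            inventories[speaker].add(f"w{rng.randrange(10**6)}")
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:               # success: both align on this word
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                         # failure: hearer adopts the word
            inventories[hearer].add(word)
        if len(inventories[0]) == 1 and all(inv == inventories[0] for inv in inventories):
            return game + 1                           # games until full convergence
    return max_games

print(naming_game())
```

    Despite early synonym proliferation, the population reliably collapses onto a single shared name; the paper's contribution is showing that this kind of dynamics still works when word meanings must be grounded in real robot perception.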

    Lexicon formation in autonomous robots

    "The meaning of a word is its use in the language." In the first half of the 20th century, Ludwig Wittgenstein introduced this idea into philosophy, and especially in the last few decades, related disciplines such as psychology and linguistics came to embrace the view that natural language is a dynamic system of arbitrary and culturally learned conventions. From the end of the nineties on, researchers around Luc Steels transferred this notion of language to the field of artificial intelligence by letting software agents, and later robots, play so-called language games in order to self-organize communication systems without requiring prior linguistic or conceptual knowledge. Continuing and advancing that research, the work presented in this thesis investigates lexicon formation in humanoid robots, i.e. the emergence of shared lexical knowledge in populations of robotic agents. Central to this is the concept of referential uncertainty: the difficulty of inferring the meaning of a previously unknown word from the context. First in a simulated environment and later with physical robots, this work starts from very simple lexicon formation models and then systematically analyzes how increasing complexity in communicative interactions demands more complex representations and learning mechanisms. We evaluate lexicon formation models with respect to their robustness, scaling, and applicability to robotic interaction scenarios. One result of this work is that the predominant selectionist approaches in the literature do not scale well and cannot cope with the challenges of grounding words in the real-world perceptions of physical robots. To overcome these limitations, we present an alternative lexicon formation model and evaluate its performance.
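    Referential uncertainty can be illustrated with a simple cross-situational learner (a generic sketch, not the thesis's model): a word is heard in a context containing several candidate referents, and co-occurrence counting lets the true referent's score pull ahead over many ambiguous episodes.

```python
import random
from collections import defaultdict

def cross_situational_learner(lexicon, n_episodes=500, context_size=4, seed=1):
    """Guess word meanings from ambiguous contexts: each episode pairs a word
    with several candidate referents, only one of which is correct (sketch)."""
    rng = random.Random(seed)
    meanings = list(lexicon.values())
    scores = defaultdict(lambda: defaultdict(int))    # scores[word][meaning]
    for _ in range(n_episodes):
        word, true_meaning = rng.choice(list(lexicon.items()))
        # the true referent is always present, plus random distractors
        context = {true_meaning} | set(rng.sample(meanings, context_size - 1))
        for m in context:                             # credit every co-occurring referent
            scores[word][m] += 1
    # hypothesis: the highest-scoring meaning per word
    return {w: max(scores[w], key=scores[w].get) for w in lexicon}

lexicon = {"ball": "BALL", "box": "BOX", "cup": "CUP", "dog": "DOG", "pen": "PEN"}
guessed = cross_situational_learner(lexicon)
print(guessed)
```

    The true meaning co-occurs with its word in every episode while distractors appear only sporadically, so the mapping is eventually recovered; the thesis examines how such mechanisms break down as interactions and perception become more realistic.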

    The grounded naming game

    This chapter shows a concrete example of a language game experiment for studying the cultural evolution of one of the most basic functions of language, namely to draw attention to an object in the context by naming a characteristic feature of the object. If the object is a specific recognizable individual, then the name is called a proper name, and this is the case studied in this chapter. We investigate a concrete operational language strategy, with a conceptual as well as a linguistic component, and show that a population of agents endowed with this strategy is able to self-organize a vocabulary of grounded proper names from scratch. The experiment provides a clear illustration of the role of alignment in stimulating self-organization, and of how expressive adequacy, cognitive effort, learnability, and social conformity act as selectionist forces, driving the population towards an effective language system. This research was carried out at the AI Lab of the University of Brussels (VUB) and the Sony Computer Science Laboratory in Paris. Peer reviewed.
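    The alignment mechanism mentioned above is often implemented as lateral inhibition over scored word-meaning associations. A minimal sketch of such an update rule (generic to naming-game models, with an illustrative `delta` parameter, not the chapter's exact values):

```python
def align(inventory, used_word, success, delta=0.1):
    """Lateral-inhibition alignment (sketch): on success, reward the used
    word and inhibit its competitors; on failure, punish the used word.
    Scores are clipped to [0, 1]; words at 0 are forgotten."""
    for word, score in list(inventory.items()):
        if word == used_word:
            score += delta if success else -delta
        elif success:
            score -= delta                 # inhibit competing synonyms
        inventory[word] = min(1.0, max(0.0, score))
        if inventory[word] == 0.0:
            del inventory[word]            # prune dead words
    return inventory

print(align({"wa": 0.5, "wb": 0.4}, "wa", success=True))
```

    Repeated application of such updates across a population is what drives the selectionist dynamics described in the abstract: successful conventions are reinforced while unused synonyms decay and disappear.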

    A tool for running experiments on the evolution of language

    Computational and robotic research into symbolic communication systems requires sophisticated tools. This chapter introduces Babel, a tool framework that has been developed to run extensive, repeatable multi-agent experiments, including experiments with embodied robots. A brief example is presented of how experiments are configured in this framework. Peer reviewed.

    Why robots

    In this paper we offer arguments for why modeling in the field of artificial language evolution can benefit from the use of real robots. We propose that robotic experimental setups lead to more realistic and robust models, that real-world perception can provide the basis for richer semantics, and that embodiment itself can be a driving force in language evolution. We discuss these proposals by reviewing a variety of robotic experiments that have been carried out in our group, and argue for the relevance of the approach.

    The Semantics of SIT, STAND, and LIE Embodied in Robots

    In this paper we demonstrate (1) how a group of embodied artificial agents can learn to construct abstract conceptual representations of body postures from their continuous sensorimotor interaction with the environment, (2) how they can metaphorically extend these bodily concepts to visual experiences of external objects, and (3) how they can use their acquired embodied meanings for self-organizing a communication system about postures and objects. For this, we endow the agents with cognitive mechanisms and structures that are instantiations of specific ideas in cognitive linguistics (namely image schema theory) about how humans relate motor and visual space. We show that the agents are indeed able to perform well in the task; the experiment thus offers a concrete operationalization of these theories and increases their explanatory power.
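    One simple way to picture posture concepts grounded in sensorimotor data is nearest-prototype classification over joint-angle vectors. The sketch below is purely illustrative: the joint names, prototype values, and distance metric are assumptions, not the paper's actual representations.

```python
import math

def nearest_prototype(sensor_vector, prototypes):
    """Classify a body posture by Euclidean distance to stored prototype
    vectors (illustrative stand-in for learned posture concepts)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda name: dist(sensor_vector, prototypes[name]))

# hypothetical prototypes over (torso_pitch, hip_angle, knee_angle) in radians
prototypes = {
    "STAND": (0.0, 0.0, 0.0),
    "SIT":   (0.1, 1.5, 1.5),
    "LIE":   (1.5, 0.1, 0.1),
}
print(nearest_prototype((0.2, 1.4, 1.3), prototypes))
```

    The paper goes further than such static prototypes: it extends the bodily concepts metaphorically to visually perceived external objects and uses them as the semantics of an emergent communication system.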

    XABSL -- A Pragmatic Approach to Behavior Engineering

    This paper introduces the Extensible Agent Behavior Specification Language (XABSL) as a pragmatic tool for engineering the behavior of autonomous agents in complex and dynamic environments. It is based on hierarchies of finite state machines (FSMs) for action selection and supports the design of long-term, deliberative decision processes as well as short-term, reactive behaviors. A platform-independent execution engine makes the language applicable on any robotic platform, and together with a variety of visualization, editing, and debugging tools, XABSL is a convenient and powerful system for the development of complex behaviors. The complete source code can be freely downloaded from the XABSL website.
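    The core idea of hierarchical FSM-based action selection can be sketched in a few lines of Python. This is a conceptual illustration of option hierarchies, not XABSL's actual syntax or engine; the option names and world-state keys are made up.

```python
class Option:
    """One node in a behavior hierarchy: its decide function maps the
    world state to either a sub-option or a basic action string (sketch)."""
    def __init__(self, name, decide):
        self.name = name
        self.decide = decide               # state -> Option | action string

def select_action(root, state, max_depth=10):
    """Walk down the option hierarchy until a basic action is reached."""
    node = root
    for _ in range(max_depth):
        result = node.decide(state)
        if isinstance(result, str):        # reached a basic action
            return result
        node = result                      # descend into the chosen sub-option
    raise RuntimeError("option hierarchy too deep")

# hypothetical two-level hierarchy for a soccer robot
kick = Option("kick", lambda s: "kick_ball")
walk = Option("walk", lambda s: "walk_to_ball")
play = Option("play", lambda s: kick if s["ball_close"] else walk)
search = Option("search", lambda s: "turn_head")
root = Option("root", lambda s: play if s["ball_seen"] else search)

print(select_action(root, {"ball_seen": True, "ball_close": False}))
```

    Each interaction cycle re-evaluates the hierarchy from the root, which is what lets such systems combine deliberative upper layers with reactive leaf behaviors.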